VSCode One-key Access Notebook Experience Algorithm Kit to Quickly Complete Water Meter Reading


2024-07-13

Abstract: This example focuses on a real AI demand scenario and walks through using VS Code's one-click access to a Notebook to experience the algorithm suite and quickly complete water meter reading recognition.

This article is shared from the Huawei Cloud Community post "VSCode One-key Access Notebook Experience Algorithm Kit to Quickly Complete Water Meter Reading" by HWCloudAI.


The algorithm development kit currently provides two sets of algorithm assets: self-developed (the ivg series) and open source (the mm series), covering tasks such as classification, detection, segmentation, and OCR. In this example, the self-developed segmentation algorithm (ivgSegmentation) and the open-source OCR algorithm (mmOCR) are combined to complete the water meter reading recognition project, which is then deployed as an online service on HUAWEI CLOUD using the algorithm development kit.
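At a high level, the project chains two models: the segmentation model finds the dial's reading area, and the cropped area is then fed to the OCR model. Below is a toy sketch of that data flow; the function names and logic are illustrative stand-ins only, not part of either suite.

```python
# Toy sketch of the two-stage pipeline: segment the dial's reading
# area, crop it, then hand the crop to OCR. The real models come from
# ivgSegmentation (deeplabv3) and mmOCR (robust_scanner); everything
# below is an illustrative stand-in for the data flow only.

def segment_reading_area(mask):
    """Stage-1 stand-in: given a binary mask (rows of 0/1), return the
    bounding box (top, left, bottom, right) of the 1-pixels."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for c in range(len(mask[0])) if any(row[c] for row in mask)]
    return rows[0], cols[0], rows[-1], cols[-1]

def crop(image, box):
    """Cut the predicted reading area out of the full image."""
    top, left, bottom, right = box
    return [row[left:right + 1] for row in image[top:bottom + 1]]

def read_meter(image, mask):
    """Stage 2 would pass this crop to the OCR model for a digit
    string; here we just return the crop itself."""
    return crop(image, segment_reading_area(mask))

mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
print(segment_reading_area(mask))  # (1, 1, 1, 2)
```

In the real project, stage 1 is the deeplabv3 model trained in Step2 and stage 2 is the robust_scanner model trained in Step3.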

Note:

This case tutorial applies only to the new version of Notebook in the "North China-Beijing 4" region.

Prepare data

1. Log in to the OBS console and create an OBS bucket, selecting "North China-Beijing 4" as the region.

2. Log in to the ModelArts console and set the console region to "North China-Beijing 4".

3. On the Global Configuration page, check whether authorization has been configured to allow ModelArts to access OBS. If not, refer to Configuring Access Authorization (Global Configuration) to add it.

4. Download the datasets for this case, the water meter dial segmentation dataset and the water meter dial reading OCR recognition dataset, into the OBS bucket. Example OBS paths are as follows:

obs://{OBS bucket name}/water_meter_segmentation water meter dial segmentation data set

obs://{OBS bucket name}/water_meter_crop Water meter dial reading OCR recognition data set

Note:

Downloading datasets from AI Gallery is free, but storing them in an OBS bucket incurs a small fee; for details, see the OBS pricing details page. Please clear the resources and data promptly after completing the case.

Prepare the development environment

On the "ModelArts Console > Development Environment > Notebook (New)" page, create a GPU-type Notebook based on the pytorch1.4-cuda10.1-cudnn7-ubuntu18.04 image. For details, see the section Creating Notebook Instances.

This case uses VS Code to connect to the Notebook remotely, so SSH remote development must be enabled when creating the instance.

Figure 1 Create a Notebook instance

1. The key file of the instance must be downloaded to one of the following local directories or a subdirectory:

Windows: C:\Users\{user}

Mac/Linux: /Users/{user}
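When the remote connection is made later, the plug-in looks for this key by name under the .ssh or downloads folder of the directory above. Below is a minimal sketch of that search order; it illustrates the documented behavior under those assumptions and is not the plug-in's actual code.

```python
# Illustrative sketch of the documented key lookup: search .ssh first,
# then downloads, and return the first existing match. This mirrors
# the described behavior only, not the plug-in's real implementation.
import os

def find_key_file(key_name, home):
    """Return the first existing key path in the documented search
    order, or None if the key is in neither directory."""
    for subdir in (".ssh", "downloads"):
        path = os.path.join(home, subdir, key_name)
        if os.path.isfile(path):
            return path
    return None
```

If no match is found, the plug-in instead prompts you to select the key manually, as described in the connection step below.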

2. In the ModelArts console, choose Development Environment > Notebook, and click "More > VS Code Access" in the "Operation" column.

If VS Code has been installed locally, click "Open" to enter the "Visual Studio Code" page.

If VS Code is not installed locally, please select "win" or "other" to download and install VS Code. For VS Code installation, please refer to Installing VS Code Software.

If the ModelArts VS Code plug-in has not been installed before, an installation prompt pops up; click "Install and Open" to install it. If the plug-in is already installed, no prompt appears; skip this step and continue with the next steps.

The installation process is expected to take 1 to 2 minutes. After the installation is complete, a dialog box will pop up in the lower right corner. Please click "Reload Window and Open".

In the prompt that pops up, check "Don't ask again for this extension", and then click "Open".

3. Connect to the Notebook instance remotely.

Before the remote connection is made, the plug-in automatically searches for the key file by name in the local directories (Windows: C:\Users\{user}\.ssh or downloads; Mac/Linux: /Users/{user}/.ssh or downloads). If the key is found, it is used directly to open a new window and connect to the remote instance, with no key selection needed. If it is not found, a selection box pops up; select the correct key as prompted. If the wrong key is selected, a message pops up; follow it to select the correct key. If a pop-up reports that the connection to the instance failed, close the pop-up, check the output log in the OUTPUT window, and consult the FAQ to find the cause of the failure.

Develop with the algorithm suite

Step1 Create an algorithm project

1. After successful access, click File -> Open Folder on the VS Code page, and select the following folder to open

2. Create a new terminal

3. In the work directory, execute the following command to create a project:

ma-cli createproject

Enter a project name at the prompt, for example water_meter. Then press Enter to accept the default parameters, and choose to skip the asset installation step (option 6).

4. Execute the following command to enter the project directory.

cd water_meter

5. Execute the following command to copy the project data to Notebook.

python manage.py copy --source {obs_dataset_path} --dest ./data/raw/water_meter_crop
python manage.py copy --source {obs_dataset_path} --dest ./data/raw/water_meter_segmentation

Note:

The {obs_dataset_path} path is the dataset downloaded to OBS in Step1 Prepare data, such as "obs://{OBS bucket name}/water_meter_segmentation" and "obs://{OBS bucket name}/water_meter_crop".

Step2 Use deeplabv3 to complete the water meter area segmentation task

1. First install the ivgSegmentation suite.

python manage.py install algorithm ivgSegmentation==1.0.2

If it reports that the ivgSegmentation version is incorrect, query the available versions with the command python manage.py list algorithm.

2. After installing the ivgSegmentation suite, open the "./algorithms/ivgSegmentation/config/sample" folder in the project directory on the left side of the interface to view the currently supported segmentation models. Taking sample as an example (its default algorithm is deeplabv3), the folder includes config.py (algorithm shell configuration) and deeplabv3_resnet50_standard-sample_512x1024.py (model structure).

3. Dial segmentation only needs to distinguish the background from the reading area, so it is a two-class task. Modify the configuration file to match the project's dataset, as follows:

Modify the ./algorithms/ivgSegmentation/config/sample/config.py file.

# config.py
alg_cfg = dict(
    ...
    data_root='data/raw/water_meter_segmentation',  # change to the real local path of the segmentation dataset
    ...
)

Press Ctrl+S to save after modification.

4. Modify the ./algorithms/ivgSegmentation/config/sample/deeplabv3_resnet50_standard-sample_512x1024.py file.

# deeplabv3_resnet50_standard-sample_512x1024.py
gpus = [0]
...
data_cfg = dict(
    ...
    num_classes=2,               # change to 2 classes
    ...
    train_scale=(512, 512),      # (h, w); all sizes changed to (512, 512)
    ...
    train_crop_size=(512, 512),  # (h, w)
    ...
    test_scale=(512, 512),       # (h, w)
    ...
    infer_scale=(512, 512),      # (h, w)
)

5. Press Ctrl+S to save after modification.

6. In the water_meter project directory, install the deeplabv3 pre-training model.

python manage.py install model ivgSegmentation:deeplab/deeplabv3_resnet50_cityscapes_512x1024

7. Train the segmentation model. (GPU is recommended for training)

# shell
python manage.py run --cfg algorithms/ivgSegmentation/config/sample/config.py --gpus 0

The trained model will be saved in the specified location, which is output/deeplabv3_resnet50_standard-sample_512x1024/checkpoints/ by default.

8. Verify the effect of the model.

After model training is complete, you can calculate the model's metrics on the validation set. First modify the model path in the configuration file.

Modify ./algorithms/ivgSegmentation/config/sample/config.py.

# config.py
alg_cfg = dict(
    ...
    load_from='./output/deeplabv3_resnet50_standard-sample_512x1024/checkpoints/checkpoint_best.pth.tar',  # change to the path of the trained model
    ...
)

# shell
python manage.py run --cfg algorithms/ivgSegmentation/config/sample/config.py --pipeline evaluate

9. Model inference.

Model inference takes a specified image, predicts its segmented area, and visualizes the result. First, specify the path of the image to be inferred.

Modify ./algorithms/ivgSegmentation/config/sample/config.py

alg_cfg = dict(
    ...
    img_file='./data/raw/water_meter_segmentation/image/train_10.jpg',  # specify the path of the image to infer
    ...
)

Execute the following command to run model inference.

# shell
python manage.py run --cfg algorithms/ivgSegmentation/config/sample/config.py --pipeline infer

The inference output images are saved under ./output/deeplabv3_resnet50_standard-sample_512x1024.

10. Export the SDK.

The algorithm development kit supports exporting the model into a model SDK to facilitate downstream tasks such as model deployment.

# shell
python manage.py export --cfg algorithms/ivgSegmentation/config/sample/config.py --is_deploy

Step3 Water meter reading recognition

1. First install the mmocr suite.

python manage.py install algorithm mmocr

2. After installing the mmocr suite, the ./algorithms/mmocr/config/textrecog folder includes config.py (algorithm shell configuration). Modify the configuration file according to the chosen algorithm and the dataset path. The robust_scanner algorithm is used as an example below.

Modify ./algorithms/mmocr/algorithm/configs/textrecog/robustscanner_r31_academic.py.

# robustscanner_r31_academic.py
...
train_prefix = 'data/raw/water_meter_crop/'  # change the dataset path to the water meter OCR recognition dataset path
train_img_prefix1 = train_prefix + 'train'
train_ann_file1 = train_prefix + 'train.txt'
test_prefix = 'data/raw/water_meter_crop/'
test_img_prefix1 = test_prefix + 'val'
test_ann_file1 = test_prefix + 'val.txt'

3. Install the robust_scanner pre-training model.

python manage.py install model mmocr:textrecog/robust_scanner/robustscanner_r31_academic

4. Train the OCR model.

When using mmcv for the first time, mmcv-full must be compiled, which is slow; you can use the official precompiled dependency package directly instead.

Precompiled package URL: https://download.openmmlab.com/mmcv/dist/cu101/torch1.6.0/index.html

pip install https://download.openmmlab.com/mmcv/dist/cu101/torch1.6.0/mmcv_full-1.3.8-cp37-cp37m-manylinux1_x86_64.whl
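The wheel in the command above is tied to CUDA 10.1 and PyTorch 1.6.0, and the index URL for other version combinations follows a fixed pattern. The helper below only assembles that URL from the two version strings (an illustration, not part of the kit); open the resulting page to confirm a wheel actually exists for your combination.

```python
# Assemble the openmmlab precompiled-package index URL from CUDA and
# PyTorch version strings. Pure string formatting; it does not verify
# that a matching wheel is published.

def mmcv_index_url(cuda_version, torch_version):
    """e.g. cuda_version='10.1', torch_version='1.6.0'
    -> .../dist/cu101/torch1.6.0/index.html"""
    cu = "cu" + cuda_version.replace(".", "")
    return ("https://download.openmmlab.com/mmcv/dist/"
            f"{cu}/torch{torch_version}/index.html")

print(mmcv_index_url("10.1", "1.6.0"))
# https://download.openmmlab.com/mmcv/dist/cu101/torch1.6.0/index.html
```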

Change the epoch (number of iterations) in ./algorithms/mmocr/config/textrecog/config.py to 2, as shown in the following figure, then start training:

python manage.py run --cfg algorithms/mmocr/config/textrecog/config.py

The trained model will be saved in the specified location, which is output/${algorithm} by default.

5. Verify the effect of the model.

After model training is complete, you can calculate the model's metrics on the validation set. First modify the model path in the configuration file.

Modify ./algorithms/mmocr/config/textrecog/config.py

# config.py
...
model_path = './output/robustscanner_r31_academic/latest.pth'
...

# shell
python manage.py run --cfg algorithms/mmocr/config/textrecog/config.py --pipeline evaluate

6. Model inference.

Model inference takes a specified image, recognizes its reading, and visualizes the result. First, specify the path of the image to be inferred and modify the algorithms/mmocr/config/textrecog/config.py file, as follows.

Modify ./algorithms/mmocr/algorithm/configs/textrecog/robust_scanner/config.py

...
infer_img_file = './data/raw/water_meter_crop/val/train_10.jpg'  # specify the path of the image to infer
...

# shell
python manage.py run --cfg algorithms/mmocr/config/textrecog/config.py --pipeline infer

The inference output images are saved under output/robustscanner_r31_academic/vis.

7. Export the SDK.

# shell
python manage.py export --cfg algorithms/mmocr/config/textrecog/config.py

Step4 Deploy as an online service

This demonstration deploys only the OCR service, covering both local deployment and online deployment. After the service is online, it is called to infer a local image and obtain the predicted water meter reading. To deploy an online service, you need to specify an OBS bucket to save the files required for deployment.

1. Configure the OBS bucket in the algorithms/mmocr/config/textrecog/config.py file, namely obs_bucket=.

2. Execute the following command:

python manage.py export --cfg algorithms/mmocr/config/textrecog/config.py --is_deploy  # export the deployment model
python manage.py deploy --cfg algorithms/mmocr/config/textrecog/config.py  # local deployment
python manage.py deploy --cfg algorithms/mmocr/config/textrecog/config.py --launch_remote  # online deployment; this takes a while, please be patient

You can then view the successfully deployed online service on the ModelArts console.
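Once the service is online, calling it to predict a reading from a local image typically means POSTing the image in a JSON body. The sketch below shows one common shape for such a request; the "image" field name, endpoint URL, and auth header are assumptions for illustration, so check the deployed service's API description for the real request format.

```python
# Hedged sketch of calling a deployed inference endpoint with a local
# image. A common pattern for image-inference HTTP APIs is to send the
# image base64-encoded in a JSON body; the field name "image" here is
# an assumption, not the service's documented schema.
import base64
import json

def build_request_body(image_path):
    """Read a local image and wrap it as a base64 JSON payload."""
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return json.dumps({"image": encoded})

# Sending it would then look roughly like (endpoint/token are placeholders):
# requests.post(endpoint, data=build_request_body("val/train_10.jpg"),
#               headers={"X-Auth-Token": token,
#                        "Content-Type": "application/json"})
```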

Step5 Clear resources and data

After completing this example and the algorithm suite workflow, if you no longer need the resources, it is recommended that you clear them to avoid waste and unnecessary costs.

Stop the Notebook: on the Notebook page, click "Stop" in the Operation column of the corresponding instance.

Delete data: go to the OBS console, delete the uploaded data, then delete the folder and the OBS bucket.

 

Click to follow and learn about Huawei Cloud's fresh technologies for the first time~


